
    On Robotic Work-Space Sensing and Control

    Industrial robots are fast and accurate when working with known objects at precise locations in well-structured manufacturing environments, as in the classical automation setting. In one sense, the limited use of sensors leaves robots blind and numb, unaware of what is happening in their surroundings. Equipping a system with sensors has the potential to add new functionality and to increase the set of uncertainties a robot can handle, but it is not as simple as that. It is often difficult to interpret the measurements and use them to draw the necessary conclusions about the state of the work space. For effective sensor-based control, it is necessary both to understand the sensor data and to know how to act on it, giving the robot perception-action capabilities. This thesis presents research on how sensors and estimation techniques can be used in robot control. The suggested methods are theoretically analyzed and evaluated with a strong focus on experimental verification in real-time settings. One application class treated is the ability to react quickly and accurately to events detected by vision, which is demonstrated by the realization of a ball-catching robot. A new approach is proposed for performing high-speed color-based image analysis that is robust to varying illumination conditions and motion blur. Furthermore, a method for object tracking is presented along with a novel way of Kalman-filter initialization that can handle initial-state estimates with infinite variance. A second application class treated is robotic assembly using force control. A study of two assembly scenarios is presented, investigating the possibility of using force-controlled assembly in industrial robotics. Two new approaches for estimating robotic contact forces without any force sensor are presented and validated in assembly operations. The treated topics represent some of the challenges in sensor-based robot control, and it is demonstrated how they can be used to extend the functionality of industrial robots.

    Vision Based Tracker for Dart Catching Robot

    The objective of this thesis has been to develop the foundation for a robot system that catches darts. A dart board is to be mounted on a robot. When a dart is thrown at the board, it is detected by cameras, and an algorithm predicts where the dart will hit the board. The goal is then to move the board in such a way that the dart hits at the desired coordinate, typically the bull's eye. This report describes the different components developed to realize this system, including image acquisition, camera calibration, image analysis, and modeling and estimation of the dart trajectory.
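
    As an illustration of the trajectory modeling and estimation step, the sketch below predicts the hit point of a dart on the board plane from an estimated position and velocity, assuming drag-free ballistic flight. The coordinate conventions, function names, and the choice to ignore air drag are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity, world frame with z up (illustrative convention)

def predict_hit_point(p0, v0, board_x):
    """Predict where a dart with estimated position p0 [m] and velocity v0 [m/s]
    crosses the vertical board plane x = board_x, ignoring air drag.

    Returns the 3-D hit point, or None if the dart is not moving towards the board."""
    if v0[0] <= 1e-9:                       # dart moving away from, or parallel to, the board
        return None
    t = (board_x - p0[0]) / v0[0]           # time of flight to the board plane
    if t < 0.0:
        return None
    return p0 + v0 * t + 0.5 * G * t * t    # ballistic position at the impact time

# Example: dart 2 m in front of a board at x = 0, thrown towards it at 6 m/s
hit = predict_hit_point(np.array([-2.0, 0.1, 1.6]), np.array([6.0, 0.2, 1.0]), 0.0)
```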

    Robotic Work-Space Sensing and Control

    Industrial robots are traditionally programmed using only the internal joint position sensors, in a sense leaving the robot blind and numb. Using external sensors, such as cameras and force sensors, allows the robot to detect the existence and position of objects in an unstructured environment, and to handle contact situations that are not possible using only position control. This thesis presents work on how external sensors can be used in robot control. A vision-based robotic ball-catcher was implemented, showing how high-speed computer vision can be used for robot control with hard time constraints. Special attention is paid to tracking of a flying ball with an arbitrary number of cameras, how to initialize the tracker when no information about the initial state is available, and how to dynamically update the robot trajectory when the end point of the trajectory is modified due to new measurements. In another application example, force control was used to perform robotic assembly. It is shown how force sensing can be used to handle uncertain positions.
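
    The abstract mentions tracking a flying ball with an arbitrary number of cameras. One standard way to fuse pixel measurements from several calibrated cameras into a 3-D position is linear (DLT) triangulation, sketched below; this is a generic textbook construction and not necessarily the exact estimator used in the thesis.

```python
import numpy as np

def triangulate(points_2d, proj_matrices):
    """Linear (DLT) triangulation of one 3-D point from N >= 2 calibrated cameras.

    points_2d     : list of (u, v) pixel measurements, one per camera
    proj_matrices : list of 3x4 camera projection matrices P_i
    Returns the 3-D point in world coordinates (least-squares solution)."""
    A = []
    for (u, v), P in zip(points_2d, proj_matrices):
        # Each view contributes two linear constraints on the homogeneous point X:
        #   u * (P[2] @ X) - P[0] @ X = 0
        #   v * (P[2] @ X) - P[1] @ X = 0
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                  # null-space direction = homogeneous solution
    return X[:3] / X[3]         # dehomogenize
```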

    Color-Based Detection Robust to Varying Illumination Spectrum

    In color-based detection methods, varying illumination often causes problems, since an object may be perceived to have different colors under different lighting conditions. In the field of color constancy, this is usually handled by estimating the illumination spectrum and compensating for its effect on the perceived color. In this paper a method for designing a robust classifier is presented: instead of estimating and adapting to the current lighting conditions, the classifier is made wide enough to detect a colored object over a given range of lighting conditions. This strategy also naturally handles the case where different parts of an object are illuminated by different light sources at the same time. Only one set of training data per light source has to be collected, and the detector can then handle any combination of the light sources over a large range of illumination intensities.
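
    The sketch below illustrates one way such a widened classifier could be built: train one mean chromaticity per light source and accept a pixel whose intensity-normalized chromaticity lies close to any mixture of two trained illuminants. The chromaticity normalization, the segment-distance test, and the threshold are illustrative assumptions, not the paper's exact classifier.

```python
import numpy as np
from itertools import combinations

def chromaticity(rgb):
    """Intensity-normalized chromaticity (r, g); the normalization makes the
    test largely insensitive to the overall illumination intensity."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum()
    return rgb[:2] / s if s > 0 else np.zeros(2)

def train_means(samples_per_illuminant):
    """One mean chromaticity per light source, from training pixels of the target color."""
    return [np.mean([chromaticity(p) for p in samples], axis=0)
            for samples in samples_per_illuminant]

def dist_to_segment(x, a, b):
    """Distance from point x to the segment a-b, i.e. to the closest convex combination of a and b."""
    ab = b - a
    lam = np.clip(np.dot(x - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(x - (a + lam * ab))

def is_target_color(rgb, means, threshold=0.03):
    """Accept a pixel if its chromaticity is close to any mixture of two trained illuminants."""
    x = chromaticity(rgb)
    if len(means) == 1:
        return np.linalg.norm(x - means[0]) < threshold
    return any(dist_to_segment(x, a, b) < threshold for a, b in combinations(means, 2))
```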

    Nonlinear lateral control strategy for nonholonomic vehicles

    This paper proposes an intuitive nonlinear lateral control strategy for trajectory tracking in autonomous nonholonomic vehicles. The controller has been implemented and verified in Alice, Team Caltech's entry in the 2007 DARPA Urban Challenge competition for autonomous vehicles. A kinematic model is derived, and the control law is described and analyzed. Results from simulations and field tests are given and evaluated. Finally, the key features of the proposed controller are reviewed, followed by a discussion of some limitations of the proposed strategy.
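
    Since the paper's exact control law is not reproduced in the abstract, the sketch below only illustrates the kind of setup involved: a kinematic bicycle model of a nonholonomic vehicle together with a Stanley-style nonlinear lateral law that combines the heading error with a nonlinear function of the cross-track error. Both the model discretization and the particular control law are assumptions made for illustration.

```python
import numpy as np

def bicycle_step(state, delta, v, wheelbase, dt):
    """One Euler step of a kinematic bicycle model.
    state = (x, y, theta): planar position and heading; delta: steering angle;
    v: forward speed. The nonholonomic constraint is implicit: the vehicle can
    only move along its current heading."""
    x, y, theta = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += v / wheelbase * np.tan(delta) * dt
    return np.array([x, y, theta])

def lateral_control(cross_track_err, heading_err, v, k=1.0, eps=0.1):
    """Stanley-style nonlinear lateral law (illustrative, not the paper's controller):
    cancel the heading error plus a term that is nonlinear in the cross-track error
    and scaled down at higher speeds."""
    return heading_err + np.arctan2(k * cross_track_err, v + eps)
```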

    Robotic Assembly Using a Singularity-Free Orientation Representation Based on Quaternions

    New robotic applications often require physical interaction between the robot and its environment. For this purpose, external sensors might be needed, as well as a suitable way to specify the tasks. One complication that might cause problems during task execution is singularities in the orientation representation. In this paper quaternions are used as a singularity-free orientation representation within the constraint-based task specification framework. The approach is experimentally verified in a force-controlled assembly task. The chosen task contains a redundant degree of freedom that is exploited using the constraint-based task specification framework.
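
    A common way to obtain a singularity-free orientation error from unit quaternions is to use the vector part of the error quaternion, as sketched below. This is a standard construction given here for illustration; it is not claimed to be the exact error signal used in the paper's constraint-based framework.

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    """Conjugate (= inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def orientation_error(q_desired, q_actual):
    """Singularity-free orientation error: vector part of q_e = q_desired * q_actual^{-1}
    (unit quaternions assumed). The sign flip keeps the error on the shorter rotation arc."""
    q_e = quat_mul(q_desired, quat_conj(q_actual))
    if q_e[0] < 0.0:
        q_e = -q_e
    return q_e[1:]    # 3-D error signal suitable as input to an orientation controller
```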

    Vision Based Tracker for Dart-Catching Robot

    This paper describes how high-speed computer vision can be used in a motion control application. The specific application investigated is a dart-catching robot. Computer vision is used to detect a flying dart, and a filtering algorithm predicts its future trajectory. This provides data to a robot controller, allowing it to catch the dart. The performance of the implemented components indicates that the dart-catching application can be made to work well. Conclusions are also drawn about which features of the system are critical for performance.
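
    As a complement to the analytic hit-point calculation sketched earlier, the example below shows how a discrete-time ballistic process model can be used to extrapolate the dart's state into the future, which is the kind of prediction a Kalman-filter-based tracker would pass to the robot controller. The state layout and the omission of the measurement update are assumptions made for brevity, not details from the paper.

```python
import numpy as np

def ballistic_model(dt):
    """Discrete-time ballistic model: state x = [position; velocity] in 3-D,
    constant-velocity dynamics with gravity entering as a known input."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # position integrates velocity
    B = np.vstack([0.5 * dt**2 * np.eye(3),       # gravity acts on position ...
                   dt * np.eye(3)])               # ... and on velocity
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # cameras measure position only (update step omitted here)
    return F, B, H

def predict_ahead(x, F, B, g, n_steps):
    """Propagate the current state estimate n_steps into the future, giving the
    predicted dart positions that the robot controller can act on."""
    positions = []
    for _ in range(n_steps):
        x = F @ x + B @ g
        positions.append(x[:3].copy())
    return np.array(positions)

# g = np.array([0.0, 0.0, -9.81]) in a world frame with z up (illustrative convention)
```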

    Initialization of the Kalman Filter without Assumptions on the Initial State

    In the absence of covariance data, Kalman filters are usually initialized by guessing the initial state. Making the variance of the initial-state estimate large ensures that the estimate converges quickly and that the influence of the initial guess soon becomes negligible. If, however, only very few measurements are available during the estimation process and an estimate is wanted as soon as possible, this might not be enough. This paper presents a method to initialize the Kalman filter without any knowledge about the distribution of the initial state and without making any guesses.
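
    One standard way to represent a complete lack of prior knowledge is the information form of the Kalman filter, where an infinite initial variance corresponds to zero initial information. The sketch below shows only the measurement-update side of such a filter; the time update and the paper's specific construction are not reproduced here.

```python
import numpy as np

class InformationFilter:
    """Measurement updates in information form. Starting with zero information
    (Y = 0) corresponds to an initial state with infinite variance, i.e. no
    assumption at all about the initial state. An estimate can be read out as
    soon as enough measurements have made the information matrix invertible."""

    def __init__(self, n):
        self.Y = np.zeros((n, n))    # information matrix  Y = P^{-1}
        self.y = np.zeros(n)         # information vector  y = P^{-1} x

    def update(self, H, R, z):
        """Incorporate a measurement z = H x + noise, with noise covariance R."""
        Ri = np.linalg.inv(R)
        self.Y += H.T @ Ri @ H
        self.y += H.T @ Ri @ z

    def estimate(self):
        """Return the state estimate, or None while the state is not yet observable."""
        if np.linalg.matrix_rank(self.Y) < self.Y.shape[0]:
            return None
        return np.linalg.solve(self.Y, self.y)
```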

    Detection of Contact Force Transients in Robotic Assembly

    A robotic assembly task is usually implemented as a sequence of simple motions, and the transitions between the motions are made when certain events occur. These events can usually be detected with thresholds on some signal, but a faster response is possible by detecting the transient in that signal. This paper considers the problem of detecting these transients. A force-controlled assembly task is used as an experimental case, and transients in measured force/torque data are considered. A systematic approach to training machine-learning-based classifiers is presented. The classifiers are further implemented in the assembly task, resulting in a 15% reduction of the total assembly time.
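
    The sketch below shows the general shape of such a pipeline: sliding windows of force/torque samples are turned into simple features and used to train a classifier offline, which is then evaluated on each new window online to trigger the transition to the next motion. The window length, the features, and the choice of classifier are illustrative assumptions, not the paper's actual design.

```python
import numpy as np
from sklearn.svm import SVC

WIN = 20  # sliding-window length in samples (illustrative choice)

def window_features(window):
    """Simple per-channel features over one (WIN x 6) force/torque window:
    standard deviation and peak-to-peak amplitude, both of which highlight transients."""
    return np.concatenate([window.std(axis=0), np.ptp(window, axis=0)])

def make_dataset(recordings, labels):
    """recordings: list of (T x 6) force/torque signals; labels: matching arrays
    where labels[k][i] == 1 if the window starting at sample i contains a transient."""
    X, y = [], []
    for sig, lab in zip(recordings, labels):
        for i in range(len(sig) - WIN):
            X.append(window_features(sig[i:i + WIN]))
            y.append(lab[i])
    return np.array(X), np.array(y)

def train_classifier(recordings, labels):
    """Train once offline; at run time, classify each new window to detect the transient."""
    X, y = make_dataset(recordings, labels)
    return SVC(kernel="rbf").fit(X, y)
```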

    Adaptation of Force Control Parameters in Robotic Assembly

    Industrial robots are usually programmed to follow desired trajectories, and they are very good at position-controlled tasks. New applications, however, often require physical contact between the robot and its environment, and then the position control accuracy is generally not sufficient. Force control is a suitable alternative. The environment is often stiff, and it is then crucial to design appropriate force controllers, which is a non-trivial task for a robot programmer. This paper presents an adaptive algorithm for choosing force control parameters, based on identification of a contact model. The algorithm is experimentally verified in an assembly task with an industrial robot.
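
    The sketch below illustrates the general idea of tuning force control from an identified contact model: fit a linear stiffness to measured position and force data, then scale the gain of a simple integral force controller so that the closed force loop gets roughly a desired bandwidth regardless of how stiff the environment is. The linear contact model and the integral force law are illustrative assumptions; the paper's actual adaptive algorithm is not reproduced here.

```python
import numpy as np

def identify_stiffness(positions, forces):
    """Least-squares fit of a linear contact model f = k * x + b from measured
    positions and contact forces along the constrained direction."""
    A = np.column_stack([positions, np.ones_like(positions)])
    k, b = np.linalg.lstsq(A, forces, rcond=None)[0]
    return k                                    # estimated environment stiffness [N/m]

def integral_force_gain(k_env, desired_bandwidth=5.0):
    """With the velocity command v = ki * (f_ref - f) against a stiffness k_env,
    the force-loop gain is roughly k_env * ki, so choosing ki = bandwidth / k_env
    keeps the closed-loop bandwidth approximately constant."""
    return desired_bandwidth / max(k_env, 1e-6)
```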